5 research outputs found

    Multimedia Analysis and Access of Ancient Maya Epigraphy

    This article presents an integrated framework for multimedia access and analysis of ancient Maya epigraphic resources, developed as an interdisciplinary effort between epigraphers (scholars who decipher ancient inscriptions) and computer scientists. Our work includes several contributions: a definition of consistent conventions to generate high-quality representations of Maya hieroglyphs from the three most valuable ancient codices, which currently reside in European museums and institutions; a digital repository system for glyph annotation and management; and automatic glyph retrieval and classification methods. We study the combination of statistical Maya language models and shape representation within a hieroglyph retrieval system, the impact of applying language models extracted from different hieroglyphic resources to various data types, and the effect of shape representation choices on glyph classification. A novel Maya hieroglyph dataset is provided, which can be used both as a shape analysis benchmark and to study the ancient Maya writing system.
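    As a rough illustration of how shape similarity and a statistical glyph language model can be combined in retrieval, the sketch below re-ranks shape-based candidates with a bigram co-occurrence model. This is not the authors' implementation; the function names, glyph codes, and the weighting scheme are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): re-ranking shape-based retrieval
# candidates with a glyph co-occurrence (bigram) language model.
# All function names, weights, and data structures here are illustrative.

from collections import defaultdict

def train_bigram_model(glyph_sequences, smoothing=1e-3):
    """Estimate P(next_glyph | previous_glyph) from catalog glyph sequences."""
    counts = defaultdict(lambda: defaultdict(float))
    for seq in glyph_sequences:
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1.0
    model = {}
    for prev, nxt_counts in counts.items():
        total = sum(nxt_counts.values())
        model[prev] = {g: (c + smoothing) / (total + smoothing * len(nxt_counts))
                       for g, c in nxt_counts.items()}
    return model

def rerank(shape_scores, context_glyph, bigram, alpha=0.7):
    """Blend shape similarity with language-model probability.

    shape_scores: {candidate_glyph_code: similarity in [0, 1]}
    context_glyph: code of the glyph preceding the query in its block
    alpha: weight on the visual evidence (illustrative value)
    """
    lm = bigram.get(context_glyph, {})
    combined = {g: alpha * s + (1 - alpha) * lm.get(g, 0.0)
                for g, s in shape_scores.items()}
    return sorted(combined.items(), key=lambda kv: kv[1], reverse=True)

# Example usage with made-up glyph codes:
sequences = [["T0001", "T0102", "T0534"], ["T0001", "T0102", "T0025"]]
model = train_bigram_model(sequences)
candidates = {"T0102": 0.80, "T0025": 0.78, "T0534": 0.40}
print(rerank(candidates, "T0001", model))
```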

    Analyzing and Visualizing Ancient Maya Hieroglyphics Using Shape: from Computer Vision to Digital Humanities

    No full text
    Maya hieroglyphic analysis requires epigraphers to spend a significant amount of time browsing existing catalogs to identify individual glyphs. Automatic Maya glyph analysis provides an efficient way to assist scholars in their daily work. We introduce the Histogram of Orientation Shape Context (HOOSC) shape descriptor to the Digital Humanities community. We discuss key issues for practitioners and study the effect that certain parameters have on the performance of the descriptor. Different HOOSC parameters are tested in an automatic ancient Maya hieroglyph retrieval system under two settings, namely when shape alone is considered and when glyph co-occurrence information is incorporated. Additionally, we developed a graph-based glyph visualization interface to facilitate efficient exploration and analysis of hieroglyphs. Specifically, a force-directed graph prototype is applied to visualize Maya glyphs based on their visual similarity. Each node in the graph represents a glyph image; the width of an edge indicates the visual similarity between the two corresponding glyphs. The HOOSC descriptor is used to represent glyph shape, and pairwise glyph similarity scores are computed from it. To evaluate our tool, we designed evaluation tasks and questionnaires for two separate user groups, namely a general public group and an epigrapher scholar group. Evaluation results and feedback from both groups show that our tool provides intuitive access for exploring and discovering Maya hieroglyphic writing, and could potentially facilitate epigraphy work. The positive evaluation results and feedback further hint at the practical value of the HOOSC descriptor.
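    The following sketch illustrates the idea of a force-directed glyph graph whose edge widths encode pairwise visual similarity. It is not the paper's interface: the descriptors below are random stand-ins for precomputed HOOSC vectors, and the similarity measure, threshold, and layout parameters are illustrative choices (networkx and matplotlib are assumed to be available).

```python
# Minimal sketch (not the authors' interface): a force-directed glyph graph
# where edge width encodes pairwise visual similarity between glyph descriptors.
# Descriptors below are random stand-ins for precomputed HOOSC vectors.

import numpy as np
import networkx as nx
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
glyph_ids = [f"glyph_{i}" for i in range(8)]          # hypothetical glyph names
descriptors = rng.random((len(glyph_ids), 64))        # stand-in HOOSC features

# Cosine similarity between every pair of descriptors.
unit = descriptors / np.linalg.norm(descriptors, axis=1, keepdims=True)
sim = unit @ unit.T

# Keep only the strongest links so the layout stays readable.
graph = nx.Graph()
graph.add_nodes_from(glyph_ids)
threshold = 0.75                                      # illustrative cutoff
for i in range(len(glyph_ids)):
    for j in range(i + 1, len(glyph_ids)):
        if sim[i, j] >= threshold:
            graph.add_edge(glyph_ids[i], glyph_ids[j], weight=float(sim[i, j]))

# Force-directed (spring) layout; strongly linked glyphs are pulled closer.
pos = nx.spring_layout(graph, weight="weight", seed=0)
widths = [4.0 * graph[u][v]["weight"] for u, v in graph.edges()]
nx.draw_networkx(graph, pos, width=widths, node_size=400, font_size=7)
plt.axis("off")
plt.show()
```

    Thresholding the similarity matrix keeps the graph sparse enough for visual exploration; a production interface would show the glyph images themselves at the node positions.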

    Searching the Past: An Improved Shape Descriptor to Retrieve Maya Hieroglyphs

    No full text
    Archaeologists often spend significant time looking through traditional printed catalogs to identify and classify historical images. Our collaboration between archaeologists and multimedia researchers seeks to develop a tool to retrieve two specific types of ancient Maya visual information: hieroglyphs and iconographic elements. Towards that goal we present two contributions in this paper. The first is the introduction and analysis of a new dataset of over 3,400 Maya hieroglyphs, whose compilation involved manual search, annotation, and segmentation by experts. This dataset presents several challenges for visual description and automatic retrieval, as it is rich in complex visual details. The second and main contribution is an in-depth analysis of the Histogram of Orientation Shape Context (HOOSC) descriptor, and more precisely the development of four improvements designed to handle the visual complexity of Maya hieroglyphs: open contours, a mixture of thick and thin lines, hatching, large instance variability, and a variety of internal details. Experiments demonstrate that an adequate combination of our improvements yields roughly 20% higher retrieval precision than the original HOOSC descriptor. Complementary results on the MPEG-7 shape dataset validate some, but not all, of the proposed improvements, showing that the design of an appropriate descriptor depends on the nature of the shapes one deals with.
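    To make the descriptor concrete, the sketch below computes a simplified HOOSC-style feature at a single pivot point: local contour orientations are accumulated into a log-polar spatial grid around the pivot. This is not the authors' code and omits their four improvements; the bin counts, radii, and normalization are illustrative assumptions.

```python
# Minimal sketch of a HOOSC-style descriptor (not the authors' implementation):
# for a pivot point on a shape, orientations of neighboring contour points are
# accumulated into a log-polar spatial grid, giving a histogram-of-orientations
# shape context. Bin counts and radii below are illustrative choices.

import numpy as np

def local_orientations(points):
    """Approximate the tangent orientation at each contour point (in [0, pi))."""
    nxt = np.roll(points, -1, axis=0)
    prev = np.roll(points, 1, axis=0)
    d = nxt - prev
    return np.mod(np.arctan2(d[:, 1], d[:, 0]), np.pi)

def hoosc_at(points, pivot_idx, n_rad=2, n_ang=8, n_orient=8, max_radius=1.0):
    """Descriptor at one pivot: (n_rad * n_ang) spatial cells x n_orient bins."""
    pts = np.asarray(points, dtype=float)
    orients = local_orientations(pts)
    rel = np.delete(pts - pts[pivot_idx], pivot_idx, axis=0)
    ori = np.delete(orients, pivot_idx)

    r = np.linalg.norm(rel, axis=1)
    theta = np.mod(np.arctan2(rel[:, 1], rel[:, 0]), 2 * np.pi)

    # Log-polar spatial bins (shape-context style) plus orientation bins.
    r_edges = np.logspace(np.log10(0.1 * max_radius), np.log10(max_radius), n_rad + 1)
    r_bin = np.clip(np.digitize(r, r_edges) - 1, 0, n_rad - 1)
    a_bin = (theta / (2 * np.pi) * n_ang).astype(int) % n_ang
    o_bin = (ori / np.pi * n_orient).astype(int) % n_orient

    hist = np.zeros((n_rad, n_ang, n_orient))
    inside = r <= max_radius
    np.add.at(hist, (r_bin[inside], a_bin[inside], o_bin[inside]), 1.0)
    hist /= max(hist.sum(), 1e-9)   # normalize so descriptors are comparable
    return hist.ravel()

# Example: descriptor at one point of a unit circle sampled at 100 positions.
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
print(hoosc_at(circle, pivot_idx=0, max_radius=2.0).shape)  # (2 * 8 * 8,) = (128,)
```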